Labels

📌 Retain class distribution for seed 4:
Class 0: 4500
Class 1: 4500
Class 2: 4500
Class 3: 4500
Class 4: 4500
Class 5: 4500
Class 6: 4500
Class 7: 4500
Class 8: 4500
Class 9: 4500

📌 Forget class distribution for seed 4:
Class 0: 500
Class 1: 500
Class 2: 500
Class 3: 500
Class 4: 500
Class 5: 500
Class 6: 500
Class 7: 500
Class 8: 500
Class 9: 500

📊 Updated class distribution:
Retain set:
  Class 0: 4750
  Class 1: 4750
  Class 2: 4750
  Class 3: 4750
  Class 4: 4750
  Class 5: 4750
  Class 6: 4750
  Class 7: 4750
  Class 8: 4750
  Class 9: 4750
Forget set:
  Class 0: 250
  Class 1: 250
  Class 2: 250
  Class 3: 250
  Class 4: 250
  Class 5: 250
  Class 6: 250
  Class 7: 250
  Class 8: 250
  Class 9: 250
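
The split above keeps the class balance exact: each of the 10 CIFAR-10 classes contributes the same number of samples to the retain and forget sets (4750/250 after the update, 47500/2500 in total). A minimal sketch of how such a seeded, class-balanced split can be drawn; the function name `balanced_split` and its parameters are illustrative, not taken from the original code:

```python
import numpy as np
from torchvision import datasets

def balanced_split(targets, forget_per_class=250, seed=4):
    """Draw a seeded, class-balanced retain/forget index split."""
    rng = np.random.default_rng(seed)
    targets = np.asarray(targets)
    retain_idx, forget_idx = [], []
    for c in np.unique(targets):
        cls_idx = np.flatnonzero(targets == c)
        rng.shuffle(cls_idx)
        forget_idx.extend(cls_idx[:forget_per_class])   # e.g. 250 per class
        retain_idx.extend(cls_idx[forget_per_class:])   # e.g. 4750 per class
    return np.array(retain_idx), np.array(forget_idx)

train_set = datasets.CIFAR10(root="./data", train=True, download=True)
retain_idx, forget_idx = balanced_split(train_set.targets)
print(len(retain_idx), len(forget_idx))  # 47500 2500 on CIFAR-10
```

Sampling per class (rather than over the whole index range) is what guarantees the flat per-class counts seen in the log regardless of seed.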
⚠️ Warning: Retain train loader may not be shuffled.
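If the warning is accurate, the fix is to construct the retain loader with shuffling enabled. A minimal sketch, continuing from the split sketch above (`train_set`, `retain_idx`); the batch size of 256 matches the step size in the training log below, but the worker count and pinning are assumptions:

```python
from torch.utils.data import DataLoader, Subset

# Hypothetical fix for the warning: build the retain loader with shuffle=True
# so each epoch visits the retain samples in a fresh order.
retain_train_loader = DataLoader(
    Subset(train_set, retain_idx.tolist()),
    batch_size=256,   # matches the [256/47500] step size in the log below
    shuffle=True,     # the warning suggests this may currently be False
    num_workers=4,
    pin_memory=True,
)
```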
Training Epoch: 1 [256/47500]	Loss: 2.3232	LR: 0.000000
Training Epoch: 1 [512/47500]	Loss: 2.3281	LR: 0.000538
Training Epoch: 1 [768/47500]	Loss: 2.3210	LR: 0.001075
Training Epoch: 1 [1024/47500]	Loss: 2.3332	LR: 0.001613
Training Epoch: 1 [1280/47500]	Loss: 2.3049	LR: 0.002151
Training Epoch: 1 [1536/47500]	Loss: 2.2753	LR: 0.002688
Training Epoch: 1 [1792/47500]	Loss: 2.2538	LR: 0.003226
Training Epoch: 1 [2048/47500]	Loss: 2.2106	LR: 0.003763
Training Epoch: 1 [2304/47500]	Loss: 2.2622	LR: 0.004301
Training Epoch: 1 [2560/47500]	Loss: 2.1237	LR: 0.004839
Training Epoch: 1 [2816/47500]	Loss: 2.1528	LR: 0.005376
Training Epoch: 1 [3072/47500]	Loss: 2.0810	LR: 0.005914
Training Epoch: 1 [3328/47500]	Loss: 2.1330	LR: 0.006452
Training Epoch: 1 [3584/47500]	Loss: 2.1105	LR: 0.006989
Training Epoch: 1 [3840/47500]	Loss: 1.9605	LR: 0.007527
Training Epoch: 1 [4096/47500]	Loss: 1.9225	LR: 0.008065
Training Epoch: 1 [4352/47500]	Loss: 1.9326	LR: 0.008602
Training Epoch: 1 [4608/47500]	Loss: 1.8686	LR: 0.009140
Training Epoch: 1 [4864/47500]	Loss: 1.8138	LR: 0.009677
Training Epoch: 1 [5120/47500]	Loss: 1.8465	LR: 0.010215
Training Epoch: 1 [5376/47500]	Loss: 1.9293	LR: 0.010753
Training Epoch: 1 [5632/47500]	Loss: 1.8657	LR: 0.011290
Training Epoch: 1 [5888/47500]	Loss: 1.7383	LR: 0.011828
Training Epoch: 1 [6144/47500]	Loss: 1.7761	LR: 0.012366
Training Epoch: 1 [6400/47500]	Loss: 1.7636	LR: 0.012903
Training Epoch: 1 [6656/47500]	Loss: 1.8473	LR: 0.013441
Training Epoch: 1 [6912/47500]	Loss: 1.6025	LR: 0.013978
Training Epoch: 1 [7168/47500]	Loss: 1.8306	LR: 0.014516
Training Epoch: 1 [7424/47500]	Loss: 1.7366	LR: 0.015054
Training Epoch: 1 [7680/47500]	Loss: 1.6503	LR: 0.015591
Training Epoch: 1 [7936/47500]	Loss: 1.6998	LR: 0.016129
Training Epoch: 1 [8192/47500]	Loss: 1.7291	LR: 0.016667
Training Epoch: 1 [8448/47500]	Loss: 1.7167	LR: 0.017204
Training Epoch: 1 [8704/47500]	Loss: 1.7516	LR: 0.017742
Training Epoch: 1 [8960/47500]	Loss: 1.6548	LR: 0.018280
Training Epoch: 1 [9216/47500]	Loss: 1.7440	LR: 0.018817
Training Epoch: 1 [9472/47500]	Loss: 1.6064	LR: 0.019355
Training Epoch: 1 [9728/47500]	Loss: 1.6247	LR: 0.019892
Training Epoch: 1 [9984/47500]	Loss: 1.6438	LR: 0.020430
Training Epoch: 1 [10240/47500]	Loss: 1.7595	LR: 0.020968
Training Epoch: 1 [10496/47500]	Loss: 1.6119	LR: 0.021505
Training Epoch: 1 [10752/47500]	Loss: 1.8000	LR: 0.022043
Training Epoch: 1 [11008/47500]	Loss: 1.7579	LR: 0.022581
Training Epoch: 1 [11264/47500]	Loss: 1.6322	LR: 0.023118
Training Epoch: 1 [11520/47500]	Loss: 1.6570	LR: 0.023656
Training Epoch: 1 [11776/47500]	Loss: 1.5901	LR: 0.024194
Training Epoch: 1 [12032/47500]	Loss: 1.5640	LR: 0.024731
Training Epoch: 1 [12288/47500]	Loss: 1.6483	LR: 0.025269
Training Epoch: 1 [12544/47500]	Loss: 1.6413	LR: 0.025806
Training Epoch: 1 [12800/47500]	Loss: 1.5111	LR: 0.026344
Training Epoch: 1 [13056/47500]	Loss: 1.5380	LR: 0.026882
Training Epoch: 1 [13312/47500]	Loss: 1.7154	LR: 0.027419
Training Epoch: 1 [13568/47500]	Loss: 1.6888	LR: 0.027957
Training Epoch: 1 [13824/47500]	Loss: 1.5690	LR: 0.028495
Training Epoch: 1 [14080/47500]	Loss: 1.6381	LR: 0.029032
Training Epoch: 1 [14336/47500]	Loss: 1.6486	LR: 0.029570
Training Epoch: 1 [14592/47500]	Loss: 1.5642	LR: 0.030108
Training Epoch: 1 [14848/47500]	Loss: 1.6839	LR: 0.030645
Training Epoch: 1 [15104/47500]	Loss: 1.6109	LR: 0.031183
Training Epoch: 1 [15360/47500]	Loss: 1.5687	LR: 0.031720
Training Epoch: 1 [15616/47500]	Loss: 1.6111	LR: 0.032258
Training Epoch: 1 [15872/47500]	Loss: 1.4635	LR: 0.032796
Training Epoch: 1 [16128/47500]	Loss: 1.6330	LR: 0.033333
Training Epoch: 1 [16384/47500]	Loss: 1.6623	LR: 0.033871
Training Epoch: 1 [16640/47500]	Loss: 1.4738	LR: 0.034409
Training Epoch: 1 [16896/47500]	Loss: 1.4989	LR: 0.034946
Training Epoch: 1 [17152/47500]	Loss: 1.7701	LR: 0.035484
Training Epoch: 1 [17408/47500]	Loss: 1.5363	LR: 0.036022
Training Epoch: 1 [17664/47500]	Loss: 1.5016	LR: 0.036559
Training Epoch: 1 [17920/47500]	Loss: 1.5879	LR: 0.037097
Training Epoch: 1 [18176/47500]	Loss: 1.6605	LR: 0.037634
Training Epoch: 1 [18432/47500]	Loss: 1.6095	LR: 0.038172
Training Epoch: 1 [18688/47500]	Loss: 1.6336	LR: 0.038710
Training Epoch: 1 [18944/47500]	Loss: 1.4182	LR: 0.039247
Training Epoch: 1 [19200/47500]	Loss: 1.7213	LR: 0.039785
Training Epoch: 1 [19456/47500]	Loss: 1.3631	LR: 0.040323
Training Epoch: 1 [19712/47500]	Loss: 1.4426	LR: 0.040860
Training Epoch: 1 [19968/47500]	Loss: 1.4331	LR: 0.041398
Training Epoch: 1 [20224/47500]	Loss: 1.4424	LR: 0.041935
Training Epoch: 1 [20480/47500]	Loss: 1.4358	LR: 0.042473
Training Epoch: 1 [20736/47500]	Loss: 1.4713	LR: 0.043011
Training Epoch: 1 [20992/47500]	Loss: 1.5114	LR: 0.043548
Training Epoch: 1 [21248/47500]	Loss: 1.4726	LR: 0.044086
Training Epoch: 1 [21504/47500]	Loss: 1.4541	LR: 0.044624
Training Epoch: 1 [21760/47500]	Loss: 1.2970	LR: 0.045161
Training Epoch: 1 [22016/47500]	Loss: 1.4438	LR: 0.045699
Training Epoch: 1 [22272/47500]	Loss: 1.3083	LR: 0.046237
Training Epoch: 1 [22528/47500]	Loss: 1.3209	LR: 0.046774
Training Epoch: 1 [22784/47500]	Loss: 1.3838	LR: 0.047312
Training Epoch: 1 [23040/47500]	Loss: 1.4748	LR: 0.047849
Training Epoch: 1 [23296/47500]	Loss: 1.3540	LR: 0.048387
Training Epoch: 1 [23552/47500]	Loss: 1.2837	LR: 0.048925
Training Epoch: 1 [23808/47500]	Loss: 1.5005	LR: 0.049462
Training Epoch: 1 [24064/47500]	Loss: 1.5536	LR: 0.050000
Training Epoch: 1 [24320/47500]	Loss: 1.5412	LR: 0.050538
Training Epoch: 1 [24576/47500]	Loss: 1.4461	LR: 0.051075
Training Epoch: 1 [24832/47500]	Loss: 1.3048	LR: 0.051613
Training Epoch: 1 [25088/47500]	Loss: 1.5693	LR: 0.052151
Training Epoch: 1 [25344/47500]	Loss: 1.5664	LR: 0.052688
Training Epoch: 1 [25600/47500]	Loss: 1.4872	LR: 0.053226
Training Epoch: 1 [25856/47500]	Loss: 1.3209	LR: 0.053763
Training Epoch: 1 [26112/47500]	Loss: 1.5213	LR: 0.054301
Training Epoch: 1 [26368/47500]	Loss: 1.3753	LR: 0.054839
Training Epoch: 1 [26624/47500]	Loss: 1.4927	LR: 0.055376
Training Epoch: 1 [26880/47500]	Loss: 1.6221	LR: 0.055914
Training Epoch: 1 [27136/47500]	Loss: 1.4548	LR: 0.056452
Training Epoch: 1 [27392/47500]	Loss: 1.5679	LR: 0.056989
Training Epoch: 1 [27648/47500]	Loss: 1.5284	LR: 0.057527
Training Epoch: 1 [27904/47500]	Loss: 1.3771	LR: 0.058065
Training Epoch: 1 [28160/47500]	Loss: 1.4441	LR: 0.058602
Training Epoch: 1 [28416/47500]	Loss: 1.4381	LR: 0.059140
Training Epoch: 1 [28672/47500]	Loss: 1.3770	LR: 0.059677
Training Epoch: 1 [28928/47500]	Loss: 1.3994	LR: 0.060215
Training Epoch: 1 [29184/47500]	Loss: 1.3367	LR: 0.060753
Training Epoch: 1 [29440/47500]	Loss: 1.4653	LR: 0.061290
Training Epoch: 1 [29696/47500]	Loss: 1.3876	LR: 0.061828
Training Epoch: 1 [29952/47500]	Loss: 1.3380	LR: 0.062366
Training Epoch: 1 [30208/47500]	Loss: 1.2450	LR: 0.062903
Training Epoch: 1 [30464/47500]	Loss: 1.4699	LR: 0.063441
Training Epoch: 1 [30720/47500]	Loss: 1.2610	LR: 0.063978
Training Epoch: 1 [30976/47500]	Loss: 1.3020	LR: 0.064516
Training Epoch: 1 [31232/47500]	Loss: 1.1732	LR: 0.065054
Training Epoch: 1 [31488/47500]	Loss: 1.2627	LR: 0.065591
Training Epoch: 1 [31744/47500]	Loss: 1.3002	LR: 0.066129
Training Epoch: 1 [32000/47500]	Loss: 1.3858	LR: 0.066667
Training Epoch: 1 [32256/47500]	Loss: 1.3220	LR: 0.067204
Training Epoch: 1 [32512/47500]	Loss: 1.4041	LR: 0.067742
Training Epoch: 1 [32768/47500]	Loss: 1.2014	LR: 0.068280
Training Epoch: 1 [33024/47500]	Loss: 1.2489	LR: 0.068817
Training Epoch: 1 [33280/47500]	Loss: 1.3277	LR: 0.069355
Training Epoch: 1 [33536/47500]	Loss: 1.3781	LR: 0.069892
Training Epoch: 1 [33792/47500]	Loss: 1.2401	LR: 0.070430
Training Epoch: 1 [34048/47500]	Loss: 1.3138	LR: 0.070968
Training Epoch: 1 [34304/47500]	Loss: 1.2978	LR: 0.071505
Training Epoch: 1 [34560/47500]	Loss: 1.1606	LR: 0.072043
Training Epoch: 1 [34816/47500]	Loss: 1.3916	LR: 0.072581
Training Epoch: 1 [35072/47500]	Loss: 1.2691	LR: 0.073118
Training Epoch: 1 [35328/47500]	Loss: 1.4013	LR: 0.073656
Training Epoch: 1 [35584/47500]	Loss: 1.2335	LR: 0.074194
Training Epoch: 1 [35840/47500]	Loss: 1.0753	LR: 0.074731
Training Epoch: 1 [36096/47500]	Loss: 1.5137	LR: 0.075269
Training Epoch: 1 [36352/47500]	Loss: 1.3214	LR: 0.075806
Training Epoch: 1 [36608/47500]	Loss: 1.1829	LR: 0.076344
Training Epoch: 1 [36864/47500]	Loss: 1.3903	LR: 0.076882
Training Epoch: 1 [37120/47500]	Loss: 1.2650	LR: 0.077419
Training Epoch: 1 [37376/47500]	Loss: 1.3145	LR: 0.077957
Training Epoch: 1 [37632/47500]	Loss: 1.3517	LR: 0.078495
Training Epoch: 1 [37888/47500]	Loss: 1.3913	LR: 0.079032
Training Epoch: 1 [38144/47500]	Loss: 1.1882	LR: 0.079570
Training Epoch: 1 [38400/47500]	Loss: 1.4142	LR: 0.080108
Training Epoch: 1 [38656/47500]	Loss: 1.2810	LR: 0.080645
Training Epoch: 1 [38912/47500]	Loss: 1.3307	LR: 0.081183
Training Epoch: 1 [39168/47500]	Loss: 1.3947	LR: 0.081720
Training Epoch: 1 [39424/47500]	Loss: 1.2084	LR: 0.082258
Training Epoch: 1 [39680/47500]	Loss: 1.4748	LR: 0.082796
Training Epoch: 1 [39936/47500]	Loss: 1.3278	LR: 0.083333
Training Epoch: 1 [40192/47500]	Loss: 1.2686	LR: 0.083871
Training Epoch: 1 [40448/47500]	Loss: 1.3268	LR: 0.084409
Training Epoch: 1 [40704/47500]	Loss: 1.2429	LR: 0.084946
Training Epoch: 1 [40960/47500]	Loss: 1.2536	LR: 0.085484
Training Epoch: 1 [41216/47500]	Loss: 1.5532	LR: 0.086022
Training Epoch: 1 [41472/47500]	Loss: 1.1644	LR: 0.086559
Training Epoch: 1 [41728/47500]	Loss: 1.2541	LR: 0.087097
Training Epoch: 1 [41984/47500]	Loss: 1.2597	LR: 0.087634
Training Epoch: 1 [42240/47500]	Loss: 1.2825	LR: 0.088172
Training Epoch: 1 [42496/47500]	Loss: 1.1685	LR: 0.088710
Training Epoch: 1 [42752/47500]	Loss: 1.1325	LR: 0.089247
Training Epoch: 1 [43008/47500]	Loss: 1.1763	LR: 0.089785
Training Epoch: 1 [43264/47500]	Loss: 1.2940	LR: 0.090323
Training Epoch: 1 [43520/47500]	Loss: 1.3234	LR: 0.090860
Training Epoch: 1 [43776/47500]	Loss: 1.2960	LR: 0.091398
Training Epoch: 1 [44032/47500]	Loss: 1.0912	LR: 0.091935
Training Epoch: 1 [44288/47500]	Loss: 1.1378	LR: 0.092473
Training Epoch: 1 [44544/47500]	Loss: 1.1113	LR: 0.093011
Training Epoch: 1 [44800/47500]	Loss: 1.2056	LR: 0.093548
Training Epoch: 1 [45056/47500]	Loss: 1.1279	LR: 0.094086
Training Epoch: 1 [45312/47500]	Loss: 1.3082	LR: 0.094624
Training Epoch: 1 [45568/47500]	Loss: 1.1234	LR: 0.095161
Training Epoch: 1 [45824/47500]	Loss: 1.0954	LR: 0.095699
Training Epoch: 1 [46080/47500]	Loss: 1.3004	LR: 0.096237
Training Epoch: 1 [46336/47500]	Loss: 1.0879	LR: 0.096774
Training Epoch: 1 [46592/47500]	Loss: 1.1214	LR: 0.097312
Training Epoch: 1 [46848/47500]	Loss: 1.1261	LR: 0.097849
Training Epoch: 1 [47104/47500]	Loss: 1.0201	LR: 0.098387
Training Epoch: 1 [47360/47500]	Loss: 1.1250	LR: 0.098925
Training Epoch: 1 [47500/47500]	Loss: 0.7947	LR: 0.099462
Epoch 1 - Average Train Loss: 1.5111, Train Accuracy: 0.4522
Epoch 1 training time consumed: 18.69s
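The LR column above climbs in equal increments of roughly 0.000538 per batch, i.e. a base LR of 0.1 spread over the 186 batches of epoch 1 (47500 samples at batch size 256), ending at 0.099462 exactly as logged. A sketch of such a per-batch linear warmup using PyTorch's LambdaLR; the base LR of 0.1 and the one-epoch warmup horizon are inferred from the log, not confirmed from the original code:

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(10, 10)          # stand-in for ResNet18
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
warmup_steps = 186                        # batches in one epoch: ceil(47500/256)
scheduler = LambdaLR(optimizer, lambda step: min(step / warmup_steps, 1.0))

for batch in range(warmup_steps):
    # ... forward pass, loss.backward(), optimizer.step() would go here ...
    print(f"LR: {optimizer.param_groups[0]['lr']:.6f}")  # 0.000000, 0.000538, ...
    scheduler.step()                      # advance the LR once per batch
```

Stepping the scheduler per batch rather than per epoch is what produces the smooth ramp inside epoch 1; after `warmup_steps` the factor caps at 1.0 and the LR holds at 0.1.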
Evaluating Network.....
Test set: Epoch: 1, Average loss: 0.0069, Accuracy: 0.4893, Time consumed: 0.89s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_02_August_2025_16h_51m_44s/ResNet18-Cifar10-seed4-ret50-1-best.pth
Valid (Test) Dl:  10000
Train Dl:  50000
Retain Train Dl:  47500
Forget Train Dl:  2500
Retain Valid Dl:  47500
Forget Valid Dl:  2500
retain_prob Distribution: 10000 samples
test_prob Distribution: 10000 samples
forget_prob Distribution: 2500 samples
Set1 Distribution: 2500 samples
Set2 Distribution: 2500 samples
Set1 Distribution: 2500 samples
Set2 Distribution: 2500 samples
Set1 Distribution: 10000 samples
Set2 Distribution: 10000 samples
Set1 Distribution: 10000 samples
Set2 Distribution: 10000 samples
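The Set1/Set2 pairs above are the two sample populations fed to each pairwise membership inference attack reported below: forget vs retain and forget vs test use 2500 samples per side, test vs retain and train vs test use 10000. A hedged sketch of one common way such scores are computed, fitting a logistic regression on per-sample losses and reporting cross-validated attack accuracy (near 0.5 means the two sets are indistinguishable); the attack actually used by the original code may differ:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def mia_score(losses_set1, losses_set2):
    """Attack accuracy for separating two sets of per-sample losses."""
    X = np.concatenate([losses_set1, losses_set2]).reshape(-1, 1)
    y = np.concatenate([np.zeros(len(losses_set1)), np.ones(len(losses_set2))])
    return cross_val_score(LogisticRegression(), X, y, cv=5).mean()

# Illustration only: two synthetic, identically distributed loss sets
# (e.g. forget vs test, 2500 per side) should score close to 0.5.
rng = np.random.default_rng(4)
print(mia_score(rng.normal(1.0, 0.5, 2500), rng.normal(1.0, 0.5, 2500)))
```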
Test Accuracy: 48.662109375
Retain Accuracy: 49.28559875488281
Zero Retrain Forgetting (ZRF): 0.921710729598999
Membership Inference Attack (MIA): 0.4976
Forget vs Retain Membership Inference Attack (MIA): 0.56
Forget vs Test Membership Inference Attack (MIA): 0.495
Test vs Retain Membership Inference Attack (MIA): 0.536
Train vs Test Membership Inference Attack (MIA): 0.49625
Forget Set Accuracy (Df): 49.922672271728516
Method Execution Time: 909.49 seconds
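
For reference, the ZRF score above is commonly defined (Chundawat et al., 2023) as one minus the mean Jensen-Shannon divergence between the evaluated model's and a randomly initialized model's softmax outputs on the forget set, so a value near 1 means the model behaves on that data like one that never trained on it. A sketch under the assumption that the original code follows this formula:

```python
import torch
import torch.nn.functional as F

def js_divergence(p, q, eps=1e-8):
    """Per-sample Jensen-Shannon divergence between two softmax batches."""
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * (a.add(eps).log() - b.add(eps).log())).sum(dim=1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

@torch.no_grad()
def zrf(model, random_model, forget_loader, device="cpu"):
    """1 - mean JS divergence vs. a freshly initialized model on the forget set."""
    scores = []
    for x, _ in forget_loader:
        p = F.softmax(model(x.to(device)), dim=1)
        q = F.softmax(random_model(x.to(device)), dim=1)
        scores.append(js_divergence(p, q))
    return 1.0 - torch.cat(scores).mean().item()
```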
